Computation with Clifford Valued Feed-forward Networks
Author
Abstract
Recent research has focused on feed-forward networks with complex-valued weights and activations, such as [GK92, Hir92b, Hir92a, Hir93]. This paper extends this formalism to feed-forward networks whose weight and activation values are taken from a Clifford algebra (see also [PB92, PB94b]). A Clifford algebra is a multi-dimensional generalization of the complex numbers and the quaternions. Essentially, a Clifford algebra is obtained by extending a vector space to allow an associative multiplication compatible with the natural metric on the vector space. This paper presents an extension of the well-known back-error propagation algorithm to Clifford-valued feed-forward networks, and presents some experimental results with simple encoder-decoder problems. A discussion of the differences between real-valued and Clifford-valued networks is also included. Finally, a universal approximation theorem similar to the results found in [HSW89] is proved.
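To make the algebraic setting concrete, here is a minimal sketch (illustrative only, not code from the paper) of multiplication in the small Clifford algebra Cl(2,0), whose elements a0 + a1·e1 + a2·e2 + a3·e12 generalize the complex numbers:

```python
# Multiplication in the Clifford algebra Cl(2,0).
# An element is stored as [a0, a1, a2, a3] for a0 + a1*e1 + a2*e2 + a3*e12,
# with e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1 = e12, hence e12*e12 = -1.
# (Illustrative sketch; the paper treats general Clifford algebras.)

def clifford_mul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return [
        a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part
        a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1 part
        a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e2 part
        a0*b3 + a3*b0 + a1*b2 - a2*b1,   # e12 (bivector) part
    ]
```

The product is associative but not commutative (e1·e2 = -e2·e1), and the even subalgebra spanned by 1 and e12 behaves exactly like the complex numbers, since e12·e12 = -1.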
Similar papers
Back Propagation in a Cli
Recent work has shown that in some cases the phase information of a synaptic signal is important in the learning and representation capabilities of networks. Modelling such information with complex-valued activation signals is possible, and indeed complex back-propagation algorithms have been derived [3]. Clifford algebras give a way to generalise complex numbers to many dimensions. This paper pres...
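As a concrete illustration of the complex-valued case (my sketch, not the cited derivation), the following trains a single complex neuron by gradient descent on the real and imaginary parts of its parameters, using a "split" tanh activation, one common choice in complex back-propagation:

```python
import math

def split_tanh(z):
    # "Split" activation: tanh applied to real and imaginary parts separately,
    # a standard choice in complex back-propagation (keeps |f(z)| bounded).
    return complex(math.tanh(z.real), math.tanh(z.imag))

def train_neuron(samples, steps=2000, lr=0.1):
    """Fit y = split_tanh(w*x + b) to (x, t) pairs by gradient descent
    on the real and imaginary parts of w and b (split-complex backprop)."""
    w, b = complex(0.1, -0.1), 0j
    for _ in range(steps):
        gw, gb = 0j, 0j
        for x, t in samples:
            y = split_tanh(w * x + b)
            e = y - t
            # Componentwise tanh derivative times error, packed as one complex number.
            g = complex((1 - y.real**2) * e.real, (1 - y.imag**2) * e.imag)
            gw += g * x.conjugate()  # chain rule through s = w*x + b
            gb += g
        w, b = w - lr * gw, b - lr * gb
    loss = sum(abs(split_tanh(w * x + b) - t) ** 2 for x, t in samples)
    return w, b, loss
```

The complex conjugate in the weight gradient is exactly where this differs from the real-valued delta rule; fitting targets generated with w = 1, b = 0 recovers parameters close to those values.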
Comparison of the Complex Valued and Real Valued Neural Networks Trained with Gradient Descent and Random Search Algorithms
Complex-valued neural networks are one of the open topics in the machine learning community. In this paper we address the difficulties of gradient computation in complex-valued neural networks by combining global and local optimization algorithms. The outcome of the current research is a combined global-local algorithm for training complex-valued feed-forward neural network ...
Approximating the Semantics of Logic Programs by Recurrent Neural
Abstract. In [8] we have shown how to construct a 3-layer recurrent neural network that computes the iteration of the meaning function T_P of a given propositional logic program, which corresponds to the computation of the semantics of the program. In this article we define a notion of approximation for interpretations and prove that there exists a 3-layer feed-forward neural network that approx...
One-to-many mappings represented on feed-forward networks
Multilayer perceptrons, or feed-forward networks, are generally trained to represent functions, i.e. many-to-one (m-o) mappings. This creates a problem if the training data exhibits many-to-many or almost many-to-many valued-ness, because the model which generated the data was many-to-many. Therefore in this paper a modified feed-forward network and training algorithm is considered to...
Teaching Feed-Forward Neural Networks by Simulated Annealing
Simulated annealing is applied to the problem of teaching feed-forward networks with discrete-valued weights. Network performance is optimized by repeated presentation of training data at lower and lower temperatures. Several examples, including the parity and "clump-recognition" problems, are treated, scaling with network complexity is discussed, and the viability of mean-fie...
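A toy version of this procedure (my sketch under simplifying assumptions, not the paper's code): Metropolis-style annealing over ternary weights for a 2-2-1 network of sign units on the two-bit parity (XOR) problem mentioned in the abstract:

```python
import math
import random

def forward(net, x1, x2):
    # 2-2-1 network of sign units; all 9 parameters lie in {-1, 0, +1}.
    # Convention: sgn(0) = +1.
    w11, w12, b1, w21, w22, b2, v1, v2, c = net
    sgn = lambda s: 1 if s >= 0 else -1
    h1 = sgn(w11 * x1 + w12 * x2 + b1)
    h2 = sgn(w21 * x1 + w22 * x2 + b2)
    return sgn(v1 * h1 + v2 * h2 + c)

# Two-bit parity with +/-1 coding: output +1 iff the inputs differ.
PATTERNS = [(1, 1, -1), (1, -1, 1), (-1, 1, 1), (-1, -1, -1)]

def errors(net):
    return sum(forward(net, x1, x2) != t for x1, x2, t in PATTERNS)

def anneal(steps=5000, t0=2.0, seed=0):
    """Metropolis search over the discrete weights with geometric cooling;
    returns the best network found and its error count."""
    rng = random.Random(seed)
    net = [rng.choice((-1, 0, 1)) for _ in range(9)]
    best, best_err = net[:], errors(net)
    temp = t0
    for _ in range(steps):
        cand = net[:]
        cand[rng.randrange(9)] = rng.choice((-1, 0, 1))  # perturb one weight
        delta = errors(cand) - errors(net)
        # accept improvements always, uphill moves with Boltzmann probability
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            net = cand
            if errors(net) < best_err:
                best, best_err = net[:], errors(net)
        temp *= 0.999  # cool
    return best, best_err
```

A zero-error solution exists in this discrete space, e.g. hidden units computing OR and NAND combined by an AND output unit; the annealer typically finds one, and its advantage over gradient methods here is that no differentiable activation is needed.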